Variations on Multi-Core Nested Depth-First Search
Recently, two new parallel algorithms for on-the-fly model checking of LTL
properties were presented at the same conference: Automated Technology for
Verification and Analysis, 2011. Both approaches extend Swarmed NDFS, which
runs several sequential NDFS instances in parallel. While parallel random
search already speeds up detection of bugs, the workers must share some global
information in order to speed up full verification of correct models. The two
algorithms differ considerably in the global information shared between
workers, and in the way they synchronize.
Here, we provide a thorough experimental comparison between the two
algorithms, by measuring the runtime of their implementations on a multi-core
machine. Both algorithms were implemented in the same framework of the model
checker LTSmin, using similar optimizations, and have been subjected to the
full BEEM model database.
Because both algorithms have complementary advantages, we constructed an
algorithm that combines both ideas. This combination clearly has an improved
speedup. We also compare the results with the alternative parallel algorithm
for accepting cycle detection OWCTY-MAP. Finally, we study a simple statistical
model for input models that do contain accepting cycles. The goal is to
distinguish the speedup due to parallel random search from the speedup that can
be attributed to clever work sharing schemes.
Comment: In Proceedings PDMC 2011, arXiv:1111.006
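A minimal sequential NDFS, the building block that Swarmed NDFS runs in many parallel instances, can be sketched as follows. This is an illustrative sketch only: the graph encoding, state names, and color sets are assumptions, not taken from either paper, and no worker-level sharing is shown.

```python
# Minimal sketch of sequential nested depth-first search (NDFS) for
# accepting-cycle detection. The outer (blue) search explores the state
# space; whenever it backtracks over an accepting state, a nested (red)
# search looks for a cycle through that state.

def ndfs(succ, accepting, init):
    """Return True iff an accepting cycle is reachable from init."""
    blue, red = set(), set()

    def dfs_blue(s):
        blue.add(s)
        for t in succ(s):
            if t not in blue and dfs_blue(t):
                return True
        # On backtracking from an accepting state, start the nested search.
        if s in accepting and dfs_red(s, s):
            return True
        return False

    def dfs_red(s, seed):
        red.add(s)
        for t in succ(s):
            if t == seed:  # closed a cycle through the accepting seed
                return True
            if t not in red and dfs_red(t, seed):
                return True
        return False

    return dfs_blue(init)

# Tiny example: 0 -> 1 -> 2 -> 1, with state 1 accepting.
graph = {0: [1], 1: [2], 2: [1]}
print(ndfs(graph.__getitem__, {1}, 0))  # True: cycle 1 -> 2 -> 1
```

The two algorithms compared in the paper differ precisely in which parts of the `blue` and `red` information the parallel workers share globally.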
Partitioning Search Spaces of a Randomized Search
This work studies the following question: given an instance of the propositional satisfiability problem, a randomized satisfiability solver, and a cluster of n computers, what is the best way to use the computers to solve the instance? Two approaches, simple distribution and search space partitioning, as well as their combinations, are investigated both analytically and empirically. It is shown that the results depend heavily on the type of the problem (unsatisfiable, satisfiable with few solutions, and satisfiable with many solutions) as well as on how good the search space partitioning function is. In addition, the behavior of a real search space partitioning function is evaluated in the same framework. The results suggest that in practice one should combine the simple distribution and search space partitioning approaches.
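The trade-off the abstract describes can be illustrated with a toy simulation: simple distribution takes the minimum over n independent randomized runs, while partitioning (for an unsatisfiable instance) must wait for the slowest part. The runtime distribution and the `balance` parameter below are invented for illustration and do not model any real solver.

```python
# Toy model: simple distribution vs. search space partitioning.
import random

def simple_distribution(n, sample_runtime):
    # n computers race independent randomized runs on the whole
    # instance; the first to finish decides the runtime.
    return min(sample_runtime() for _ in range(n))

def partitioned(n, sample_runtime, balance=1.0):
    # The instance is split into n parts, all of which must be solved.
    # balance=1.0 models an ideal partitioning function (each part costs
    # runtime/n); smaller values leave one part nearly as hard as the
    # whole instance. The slowest part decides the runtime.
    parts = [sample_runtime() / n for _ in range(n - 1)]
    parts.append(sample_runtime() * (1.0 - balance + balance / n))
    return max(parts)

random.seed(0)
heavy = lambda: random.paretovariate(1.5)  # heavy-tailed solver runtime
n, trials = 8, 2000
avg = lambda f: sum(f() for _ in range(trials)) / trials
print("simple distribution:", avg(lambda: simple_distribution(n, heavy)))
print("good partitioning:  ", avg(lambda: partitioned(n, heavy)))
print("poor partitioning:  ", avg(lambda: partitioned(n, heavy, balance=0.2)))
```

With a heavy-tailed runtime distribution, min-of-n runs already gives a large speedup, which is why the quality of the partitioning function matters so much in the comparison.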
Cube and Conquer: Guiding CDCL SAT Solvers by Lookaheads
We present a new SAT approach, called cube-and-conquer, targeted at reducing solving time on hard instances. This two-phase approach partitions a problem into many thousands (or millions) of cubes using lookahead techniques. Afterwards, a conflict-driven solver tackles the problem, using the cubes to guide the search. On several hard SAT-competition benchmarks, our hybrid approach outperforms both lookahead and conflict-driven solvers. Moreover, because cube-and-conquer is natural to parallelise, it is a competitive alternative for solving SAT problems in parallel. This approach was originally developed for solving hard van der Waerden problems, and for these (hard, unsatisfiable) problems the approach is not only very well parallelisable, but outperforms all other (parallel or not) SAT solvers in terms of total run-time by at least a factor of two.
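The two phases can be sketched in miniature. In this sketch, exhaustive enumeration stands in for both the lookahead splitter (which in the real method chooses split variables heuristically) and the CDCL solver; the clause encoding and the choice of split variables are illustrative assumptions.

```python
# Miniature cube-and-conquer: phase 1 enumerates "cubes" (assignments to
# a few split variables), phase 2 "conquers" each cube. Clauses are
# lists of signed integers (DIMACS-style literals).
from itertools import product

def cube_and_conquer(clauses, nvars, split_vars):
    def satisfiable(fixed):
        # Stand-in for a CDCL solver: brute-force the free variables
        # under the cube's fixed assignment.
        free = [v for v in range(1, nvars + 1) if v not in fixed]
        for bits in product([False, True], repeat=len(free)):
            assign = {**fixed, **dict(zip(free, bits))}
            if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
                return True
        return False

    # Phase 1: every assignment to the split variables is one cube.
    for bits in product([False, True], repeat=len(split_vars)):
        cube = dict(zip(split_vars, bits))
        # Phase 2: cubes are independent, so a real implementation
        # solves them in parallel.
        if satisfiable(cube):
            return True
    return False

print(cube_and_conquer([[1, 2], [-1, 3], [-2, -3]], 3, [1]))  # True
```

The cubes are mutually independent subproblems, which is what makes the approach embarrassingly parallel.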
Partitioning SAT Instances for Distributed Solving
In this paper we study the problem of solving hard propositional satisfiability problem (SAT) instances in a computing grid or cloud, where run times and communication between parallel running computations are limited. We study analytically an approach where the instance is partitioned iteratively into a tree of subproblems and each node in the tree is solved in parallel. We present new methods which combine clause learning and look-ahead to construct partitions, evaluate their efficiency experimentally, and finally demonstrate the power of the approach in a real grid environment by solving several instances that were not solved in a SAT solver competition.
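The tree-of-subproblems structure can be sketched as follows. Here each level splits on one further variable, so the leaves are assumption sets that together cover the whole search space; a real partitioning function would instead choose splits using clause learning and look-ahead, as the paper proposes, so the fixed variable list below is purely illustrative.

```python
# Sketch of iterative partitioning into a tree of subproblems. Each
# internal node splits on the next variable; the leaves are assumption
# sets (positive or negative literals) to be solved in parallel.
def partition_tree(split_vars, depth):
    if depth == 0 or not split_vars:
        return [[]]  # a single leaf: no further assumptions
    v, rest = split_vars[0], split_vars[1:]
    subtree = partition_tree(rest, depth - 1)
    # Branch on v = true (literal v) and v = false (literal -v).
    return [[v] + p for p in subtree] + [[-v] + p for p in subtree]

leaves = partition_tree([3, 7, 12], 2)
print(len(leaves))  # 4 leaves at depth 2
print(leaves[0])    # [3, 7]
```

Because every assignment falls under exactly one leaf, the instance is unsatisfiable iff every leaf subproblem is, which is the property that makes grid-style parallel solving of the tree sound.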
Revisiting Clause Exchange in Parallel SAT Solving
Managing the learnt clause database is known to be a tricky task in SAT solvers. In the portfolio framework, the collaboration between threads through learnt clause exchange makes this problem even more difficult to tackle. Several techniques have been proposed in the last few years, but practical results are still in favor of very limited collaboration, or even no collaboration at all. This is mainly due to the difficulty that each thread has in managing the large amount of learnt clauses generated by the other workers. In this paper, we propose new efficient techniques for clause exchange within a parallel SAT solver. In contrast to most current clause exchange methods, our approach relies on both export and import policies, and makes use of recent techniques that prove very effective in the sequential case. Extensive experiments show the practical interest of the proposed ideas.
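The export/import split can be sketched as follows. The quality filter (clause length here, standing in for measures such as LBD), the thresholds, and the `Worker` class are illustrative assumptions and not the paper's actual policies.

```python
# Sketch of separate export and import policies for clause sharing
# between portfolio workers.
from collections import deque

class Worker:
    def __init__(self, export_limit=8, import_cap=100):
        self.export_limit = export_limit       # export policy: only share
                                               # clauses up to this length
        self.inbox = deque(maxlen=import_cap)  # import policy: bounded
                                               # buffer; oldest clauses are
                                               # dropped when it overflows

    def export(self, clause, peers):
        # Filter at the sender, so other workers are never flooded.
        if len(clause) <= self.export_limit:
            for p in peers:
                p.inbox.append(tuple(clause))

    def drain_imports(self):
        # The importer decides when to pull buffered clauses
        # (e.g. at restarts), independently of the exporters.
        imported = list(self.inbox)
        self.inbox.clear()
        return imported

a, b = Worker(), Worker()
a.export([1, -2, 3], [b])          # short clause: shared
a.export(list(range(1, 20)), [b])  # long clause: filtered out
print(b.drain_imports())  # [(1, -2, 3)]
```

Keeping the two policies separate is the point the abstract makes: the exporter limits what leaves a worker, while the importer controls how much each worker has to absorb.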